Meeting

A Strategy for U.S. Competitiveness in Innovation

Tuesday, June 25, 2024
Speakers

Zoë Baird, Former Senior Counselor to Secretary Gina Raimondo, U.S. Department of Commerce; CFR Member (speaking virtually)

Ylli Bajraktari, President and Chief Executive Officer, Special Competitive Studies Project

Presider

Nicholas Thompson, Chief Executive Officer, The Atlantic; CFR Member

Series on Emerging Technology, U.S. Foreign Policy, and World Order

Innovation power is playing a critical role in today’s world order, affecting global economies, militaries, and societies. Panelists discuss the strategy needed for the United States to compete in this space to ensure its national security, economic prosperity, and global influence.
 

THOMPSON: Greetings, everybody. I’m Nicholas Thompson. I’m CEO of The Atlantic, former editor of Wired. We are here for what will be an amazing conversation on one of the most important topics in the world. 

I’m joined by Ylli Bajraktari. He’s president and chief executive officer of the Special Competitive Studies Project. Greetings. How are you? 

BAJRAKTARI: Good to see you. Thank you for coming.  

THOMPSON: And we are joined by a much larger Zoë Baird, who is, as of about a week ago, former senior counselor to Secretary Raimondo. She was working there until last Friday? When did you—what was your last day?  

BAIRD: A week ago Friday.  

THOMPSON: A week ago Friday. So fresh and able to spill all of her secrets now, immediately. (Laughter.) All right, we’re going to talk about U.S.-China competition. We’re going to talk about where things stand in artificial intelligence. We’re going to talk about some of the things that build up to that, including energy policy and how it feeds into artificial intelligence. And we’ll talk about geopolitics—not just U.S.-China, but how tech policy affects dynamics between lots of different countries. We have two amazing experts. Let’s get cracking. 

Ylli, I want to start with you. You have just published a huge, long document, modeled, I believe, on NSC-68?  

BAJRAKTARI: Correct.  

THOMPSON: Wow. Small ambitions. Explain what this document says, why you wrote it, why it matters.  

BAJRAKTARI: Thanks, Nick. It’s really an honor and pleasure to be here, and especially being on the panel—even though virtually—with Zoë. So good to see you, Zoë. 

So about three weeks ago, the Special Competitive Studies Project published a report called Mid-Decade Opportunities for Strategic Victory. And, as you mentioned, we modeled it after a famous Paul Nitze document from the ’50s, National Security Council document 68, which was classified for about twenty years until Dr. Kissinger declassified it. That document, back in the ’50s, provided a grim picture of the state of the world—the contest between two, you know, ideologies or two systems, which was communism at that time and the democracies—and the steps we needed to undertake to really win that competition.  

And so what we did is—this is our second-biggest document in the last two-plus years. We wanted to outline the necessary steps that we have to undertake between 2025 and 2030, which we believe is the most critical period in this decade, if not in this century, in the technoeconomic competition against China. If you look at what they’re doing—their intent, their strategy, and their resources—you can clearly see that they’re going after becoming a world-class technology superpower. And I think in that competition, you know, we have a lot to lose, because those that end up building and using these technologies will set the rules of the road, not just for themselves but for the rest of the world. And I think our report really aims to provide a path towards how we compete and how we win in this technoeconomic competition. 

THOMPSON: All right, briefly explain the path you choose—the path you recommend.  

BAJRAKTARI: Yeah. So, bottom line, we identified three pillars that we have to really get right. Number one is, in the ’50s we came up with what is called the Endless Frontier document, which set the foundation for the innovation ecosystem in our country—where universities, the private sector, and government really came together and competed more effectively against the Soviet Union. So we argue that you need to really foster that ecosystem, that you bring together—in this age of AI, you bring together universities. And we are now at the time when they need compute power. Most of the compute power, Nick, as you know, really resides with private companies right now. And so for us to stay ahead, you need to foster a much, much closer ecosystem between universities, the private sector, and government. 

There, we need to be organized differently for this competition. You need a new public-private relationship in terms of how the private sector and government interact with each other. And that would foster the ecosystem that we believe would lead towards us leading in the six critical technologies we identify that are a must for us to win in this competition. And that is AI; chips, obviously; advanced networks—which is 5G, 6G, and everything that has to do with connectivity—the future of energy; biotech; and advanced manufacturing.  

And so once you create that ecosystem where the private sector, academia, and government can really foster that relationship of innovation—we argue in our report that military power, economic power, and soft power are no longer sufficient for you to lead in this world. That we need innovation power, a term coined by Eric Schmidt, our chair, in which he says: You have to be fast to create, adapt, and integrate technology. And so that’s the first pillar.  

The second pillar is, you know, we live in a geopolitical situation right now in which the axis of disruptors, as we call them—Iran, North Korea, China, and Russia—have deepened their relationship in almost every domain—economic, financial, military, and foreign policy. And so for that, we need to be organized differently in terms of the tools that we have, or how we execute foreign policy, military operations, intelligence operations. So pillar two of our report really outlines the steps you need to take once you have these capabilities in the technology space: how you utilize them for the purposes of your national power, and how you use them with your allies and partners to build, you know, a set of alliances and frameworks to compete against the axis of disruptors.  

And then the last pillar is, you know, we are on the brink of, like, a major AI revolution. And we already see it. Almost every six months we have better models, more powerful models. So what is the future of that economy going to look like? What are the key ingredients that we have to build? What kind of workforce, what kind of education do you need to build for that future? And I think the third pillar really goes after the key elements of what kind of technoeconomic foundation you need to build, in our country as well as with our allies and partners, so we can win in this technoeconomic competition with China.  

THOMPSON: All right. Zoë, so one of the critiques that was made of NSC-68 is that it did lay out a framework for the United States in the Cold War, but in fact it also helped create the Cold War. And that there was a moment when you could have had a different relationship between the United States and the Soviet Union. Do you have any fear that pushing forward in this very aggressive stance, as recommended on this stage and as seems to be the policy of our government stretching back through two administrations, is actually leading to a split with China that is negative? Or do you believe this is the right direction to go? 

BAIRD: Well, let me first say that I’m not going to speak for the U.S. government now that I’m no longer in the government, which is a freedom for me. But also, I just want to make that very clear. 

You know, Ylli’s work and the work of his organization is always really important. And this is another example of it. I think that it’s important to start with the assets that the U.S. and our allies have in thinking about this, that it’s not a binary question. And I think part of what Ylli’s report focuses on, which I would encourage the audience to take away, is what are the elements of America’s innovation edge, what are the elements of our collaboration with allies around that innovation edge? You know, everything ranging from open spirit—you know, the open spirit that enables people to innovate, but also the extraordinary importance of the access to capital in both the private markets and in government, as we’re increasingly coinvesting with the private sector. 

Education, as Ylli mentioned. The ties between the government and our research institutions, which are so important, and which bring in so many others from around the world so that we build these alliances from the bottom up. The businesses that are prepared to try new things, experiment with things like AI, and what does it mean for productivity, for business process, for products? 

And so in looking at this issue that Ylli focuses on in terms of the U.S.-China issue, and the question you raise about are we pursuing or creating the conditions for another Cold War, I think you have to look at how we build our strength, as well as how we address the interests of China. And clearly, we are taking a number of actions to try to prevent our strengths from empowering them. And so that’s why I think it’s really important to tie these two together. We have, in the U.S., all the leading advanced frontier AI developers. We are deeply investing in—coinvesting with the private sector in rebuilding our semiconductor capacity, and the unique capability of Taiwan Semiconductor to produce the highest-end chips. So we really have to look at both sides of the equation. 

THOMPSON: So maybe let’s go there. So— 

BAJRAKTARI: Can I add one thing here? 

THOMPSON: You can add anything you want. 

BAJRAKTARI: So this whole thing about whether or not—and Zoë is right to point out our advantages. But if you look back to 2016, ’17, and ’18, China has been pretty firm about the world they want to see going forward. So it’s not that, you know, these actions that we are now undertaking are happening in a vacuum. It’s that, you know, when you look at their strategy documents, if you look at the money they have been investing since that period, if you look at their ambitions—Made in China 2025, being the global AI leader by 2030, and today Xi Jinping says they want to be the global S&T leader by 2035—this is the world that they have been pursuing, you know, for a couple of years now.  

I think what we have done on our side is course correct. And we did it in two ways. Number one is, how do we protect some of the things that, you know, Zoë was talking about—the technology, the cutting-edge technology, which is, you know, in semiconductors. And that has been reflected in our export control policies, primarily. And then also, I think, we have been promoting a lot of these investments to bring back some of these cutting-edge technologies. And number two is really enabling and fostering this innovation where we remain ahead. This competition will be ongoing. This is not a static competition. China will pursue all avenues to get ahead in whatever they can—you know, by investing, by doubling down, despite all our policies. But these policies will give us some space to run faster.  

THOMPSON: But—either of you can answer this—so China made AI a national competitive goal, or strategic goal, well before we did. We weren’t even really caring about it at all, you know, back in the Obama administration, while China kind of made it a huge priority. And yet here we are in 2024: look at the Hugging Face rankings. All the large language models that are the best are from American companies. We’ve got Nvidia chips, we’ve got connections to TSMC, we have ASML. We are ahead in everything. We have all the top research in artificial intelligence. Why are you so worried? It’s going great, right? You know, we passed all these export controls. They can’t even manufacture the chips because of some of the work that Secretary Raimondo and others did. Like, why do we need, you know, a new NSC-68? Either one of you can answer that question. 

BAJRAKTARI: No, no, I’ll start and Zoë can help. 

Look, when we look back at 2016 and ’17, as I said—Xi Jinping came out with the list of technologies they want to dominate. AI was one of them. If you look at that list, in some of the areas they got ahead of us—solar, 5G, and digital infrastructure, for example. And so in some of the areas, because we closed our eyes, because we were not paying attention or we were pursuing a different, you know, relationship with China, they got ahead of us. What we’re trying to do here is, from now on, pay attention to the list of these technologies, because they matter. They matter for the future of our country. They matter for the future of our economic prosperity.  

But also, whoever builds these technologies ends up developing the standards and, you know, pursuing, you know, global ambitions, like they did with 5G. If you looked at Huawei’s 5G map, until we started actively slowing down their expansion around the world, most of the global 5G map looked red—apart from our country, the Brits, the Australians. But most countries started using Huawei—their antennas, their infrastructure. So our argument is that we need to double down on our advantages, because they’re not going to slow down. They’re going to pursue an aggressive path to get ahead in a lot of these technologies that matter.  

THOMPSON: Very quick question with a one-word answer. From zero to 100 percent, how confident are you that Huawei built backdoors into all that infrastructure?  

BAJRAKTARI: I think they’re probably doing it right now.  

THOMPSON: One hundred percent?  

BAJRAKTARI: Yeah. But are they going to be successful in achieving, you know, our levels of technology? Not yet, I think. 

THOMPSON: OK. So we have this very dichotomous framework here on stage, back here on 68th Street. Very much, United States versus China. You think that is the right framework, or should we be looking at it as West versus East? Should there be a different sphere that we’re looking at? How should we be thinking about the global alliances on tech policy?  

BAIRD: Yeah. I think we really need to be furthering U.S. leadership with a very broad range of allies around tech policy. And that’s really what we’ve been doing in AI, for example. And I’ll just use that as an example, but I’m happy to talk about other things. But, you know, the Europeans have passed an AI regulatory framework, which they’re just beginning to implement. But we’ve taken a very, I would say, American approach that builds on those assets I talked about before, where the U.S. has started its real consideration of regulation of this advanced technology, AI, by working with the companies that are the leading AI developers, and by appreciating that these companies have a depth of knowledge that it would be very difficult—(coughs)—excuse me—for any government to have. 

And so we started out by creating a set of voluntary commitments by advanced AI developers, both for information that’ll be shared with the government as well as limits that they’ll put on testing and involvement of others in testing. We took that then to the G-7, where it’s become the G-7 Code of Conduct. And the G-7’s now developing its efforts to oversee, monitor, observe that regulation. And Ylli referred to this, and it’s in his report. These technologies, if we’re really going to succeed in the most robust benefits as well as the greatest protections against risks, including risks from China, we’re really going to need this new kind of collaboration with business and government.  

And it’s been very interesting to see how other countries, including Europe—the EU—which have not been familiar with this kind of collaborative regulation and information sharing, are joining the U.S. in pursuing that as at least the initial way we’re approaching these issues. Obviously, we need to develop regulatory processes and, you know, the laws that will govern this as well, not just voluntary measures. But this interim experiment is really enabling us to build strong alliances around the world. It’s not just with Europe and European countries, but with Singapore and Korea and, you know, just very broadly around the world, to develop a way together of advancing those who join this alliance. And, you know, that includes the development of really technical knowledge in other countries too, so that we can have a collaboration that is the place people want to be. And I think we’ve begun that very successfully.  

THOMPSON: So let me ask you about—one of the things that happened early in the Cold War. We built a very aggressive policy towards Russia. Obviously, a strategy of deterrence. Built up our nuclear arsenal. But we also constantly had conversations back and forth, right? Paul Nitze, author of NSC-68 and my grandfather, spent his entire career negotiating arms deals with the Russians, after writing that document. Have we set up such a system with China so that if there is a moment where AGI breaks out, or if there is some catastrophic or scary moment in AI, we have the bilateral relationship in such a way that we can defuse it? Or do we not have that? 

BAJRAKTARI: So maybe Zoë can add, since she was much more recently in government than I was. There was an article about, you know, government officials meeting with the Chinese in Geneva—in which the Chinese didn’t agree to work with us on AI for military purposes. So I’m not privy to those conversations, Nick, anymore.  

But I would argue—and what we say in our report is—we have to be clear-eyed about the objectives and outcomes of those conversations, first of all. I think what we’ve seen from Chinese actions over the last couple of years is, you know, promising to work with us on, you know, cyber hacking tools, and breaking those promises the day after. Working with us on IP theft, and then—we have had this analysis in one of our previous reports—annually they steal from us the equivalent of the GDP of the state of Virginia. And so I think we just have to be clear-eyed about the competitor we’re facing, number one.  

Number two is, to what purpose would those conversations lead, or these mil-to-mil channels, like we had during the Cold War, maybe, with the Soviet military? The one thing that makes today’s reality a little bit different than the Cold War is that this technology is being developed in the private sector. And so how do we represent the private sector in those conversations, when, you know, the government doesn’t even have this technology being developed in its labs? So it depends, right? 

On their side they have civil-military fusion, as you know. Their government, the CCP, can tell their private sector what to do. We can’t do that, and we will never be able to do that, because we’re a democracy, right? So I just don’t see a clear objective coming out of those conversations, to be honest with you, without setting up clear objectives and clear outcomes of what you want to achieve out of those conversations. 

THOMPSON: It’s a pretty strong position. Zoë, do you agree with that?  

BAIRD: Well, I think you have to attempt to address these issues using every tool you have. And, you know, I’m just speaking for myself now not based on any government knowledge that Ylli alluded to, but I think what you’re pointing out is a tool that we’ve had to avoid unnecessary conflict. On the other hand, we are very clear-eyed as a country, I think. And, you know, I think that is another asset we have. (Laughs.) And our private sector is a great asset to all of this. So I think our best policy is to use all the tools that we have and the experience we have. 

THOMPSON: OK. Interesting. 

BAJRAKTARI: I would argue—Nick, one more thing here is that, you know, whereas our Department of Defense has been fully transparent about the ways they’re going to build and use AI—I mean, if you look at the first memo by the secretary of defense, Secretary Austin, it was on using AI in a responsible and ethical way—we have not seen anything from China on how they’re planning to build and use AI for military purposes. Nor do I think we will see anything coming out of China in this context. It’s just a matter of, like, you know, how we are built as a system—completely opposite of how they’re built. And so, as I said at the beginning, we just have to be clear-eyed about the competitor we face, and the lack of, maybe, transparency and openness to achieve this objective.  

THOMPSON: Is there anything that could happen in the next couple of years that would dramatically change this conversation? Of the various things that I can think of: China jumps ahead in quantum computing, China has a breakthrough in fusion that we don’t have, China takes over Taiwan and gets control of TSMC. Which of these three things do you think is most likely and most worrying to you? Both of you can answer that question. 

BAJRAKTARI: I think all of them are worrying, Nick, to be honest with you. I mean, not that I’m deflecting your question, but in my mind probably the biggest concern would be China getting ahead in AGI. If you look at all these models, if you look at the conversations happening right now, the amount of money going in, the amount of ambition to get towards AGI, or any kind of models that resemble AGI—I think that would probably be the most worrying thing for me, since I worry about the technoeconomic aspect of national security. So I think they know this is a full-speed-ahead competition over who gets there first on AGI. Whether that AGI is a combo of quantum, some new advanced algorithms, some new data that can break the current data walls—that’s to be seen. But I think they are pretty clear-eyed about the current state of play regarding the next models and how powerful the next couple of models could be.  

THOMPSON: Zoë, do you agree with that synthesis? 

BAIRD: Well, you know, I think all of the issues you raise are worrying. And I would add to that, our inability to meet the energy challenges that we have along with the climate change objectives. But I would also say that we can’t look at this issue in the abstract without looking at the history of our managing the relationship with China. And, you know, cybersecurity has been a huge issue before this, and our economy has not collapsed. And so I think it’s really important to address these issues with the kind of ambition that Ylli outlines in his report, the ambition to build our capacities and awareness. But I think that the takeaway from this event should not be that we should be afraid of what China’s doing. I think we have to step up in a clear-eyed way. And that the ways that we need to step up are very aligned with the ways that we need to build the strength of our own country and our alliances. 

BAJRAKTARI: Let me—can I just double down? Because I think Zoë is right to point out the positives. And, you know, we live in a highly polarized environment right now. So the first chapter—and if you remember this from NSC-68—is the fundamental principles of the United States.  

THOMPSON: Yeah, I remember that.  

BAJRAKTARI: And so we put the fundamental principles of the United States in our document as well, where we highlight our economic advantages, our military advantages, and the innovation power that Zoë has alluded to. Yes, on the political front, we live in a year that is an election year. So we acknowledge that. But if you look across all the other domains, we still lead heavily. But that lead should not be taken for granted. So we have to continue out-innovating and continue on the path of being the number-one innovation power that we are known for.  

THOMPSON: Let me ask you about energy policy, since it’s come up a couple times. So one of the things that interests me about energy is that in AI, I understand why we’re, you know, diverging into a world of U.S. tech, and Chinese tech, and Western tech, and Eastern tech, and they shall not meet. When it comes to climate change, it would be really great if we could share all of our innovations, how to make the most efficient curved solar or the most efficient geothermal. Do you worry that the technology—maybe I’ll start here with Zoë, since I haven’t gotten to start as many of these questions with you—do you worry that moving into a somewhat antagonistic, hyper-competitive view on AI and technology will also inadvertently slow down our progress towards solving global climate change problems? 

BAIRD: No. I don’t, really. I think the real issue is the need to recognize the huge amounts of energy needed for AI and potentially other technologies, and that that hasn’t been factored into the climate change discussions as much as it needs to be. And I think that’s where the competition will come: having the capacity. So we need to be looking at, you know, what are the allied countries that have enormous capacity for hydropower, and do they want to develop that? You know, advanced nuclear. And I think that’s really where we need to have a very broad strategy that aligns the different investments with government policy in order to meet all these objectives. And create a web, a network, of allies, because, you know, hydropower is a powerful potential in other countries, but not in the U.S., for example.  

THOMPSON: Let me make it—can I make it specific—a more specific version of the question, sort of the too-vague question I asked. Let’s say that we had a huge breakthrough in fusion. Right, oh, my God, like, we figured it out, right? Scientific breakthrough at one of the, you know, California companies that’s doing it. Wow, now we can, you know, power our AI much more efficiently, right? And right now if we’re behind in energy we’re ahead in chips, maybe this allows us to get a bigger edge. Would you put export controls on that fusion technology, even if China getting it would be hugely helpful for climate change? 

BAJRAKTARI: I mean, look, even when you look at the export controls currently, they’re not towards all chips. It’s for specific cutting-edge semiconductors that have a military application. So, I mean, the language is pretty precise if you look at the policy document. 

THOMPSON: Totally. Totally. 

BAJRAKTARI: And so my point is, look—most of these technologies are dual use, as you know. And so for what purpose are you going to use it? You know, who’s the end user? You know, are you going to use it for some kind of military operation, military purpose? I think those are the questions that usually, you know, our government goes through before it comes up with the policies of where to put export controls.  

THOMPSON: But so here with fusion it would be used to power China’s—all of China’s AI technology, which would effectively have some military uses. So there would be a pretty good case for export controls on fusion tech.  

BAJRAKTARI: I mean, it’s a hypothetical question. (Laughs.) So I don’t— 

THOMPSON: Yeah, but we’re at the—it’s lunch. Come on. Like, what are we—it’s not like—we’re not actually in a meeting. 

BAJRAKTARI: (Laughs.) I know. But, no, I really think you have to look at it in the context of when it would happen. You know, as I mentioned at the beginning of this conversation, everything in the next five years points to China having serious ambitions on the technology front, towards Taiwan, and everything else. So the fusion conversation would not happen in isolation. It would be part of the broader geopolitical conversation that is happening.  

Look at their relationship with Russia, North Korea, and Iran today, for example. We wrote our last big report in October of 2022. You know, the relationship that you have right now between these four countries was completely different two years ago. If you look at the economic, military, financial, and foreign policy domains, they are now much, much closer than we observed two years ago. And so I think you cannot look in isolation at a particular export control policy on a certain technology without looking at the broader geopolitical context.  

THOMPSON: OK. I have several hundred more questions, but we’re at halftime.  

BAJRAKTARI: (Inaudible.) (Laughs.) 

THOMPSON: I have lots of—lots of—yeah. (Laughs.) Lots of theories. We could go to the bar afterwards. But I would like to invite members to join our conversation with questions. And, Ms. Dyson, in the front, let’s begin with you. 

Q: Esther Dyson from Wellville. 

I think there are two big elephants in the room. One is the elections, which I’ll pass to someone else. And the other is, call it U.S. business models. Our biggest vulnerability actually is within our own private sector, the ability to manipulate our people. And there are two vulnerabilities here. One is our people’s self-awareness and awareness of how they’re being manipulated. And, two, that all these companies are susceptible to flows of money and, as we’ve seen, they tend to not be that careful about where the money comes from. And I’d love to just hear a little more talk about that, and the increasing role of private sector, AKA Elon Musk, in international affairs. You know, what are we going to do about that? 

BAJRAKTARI: It’s a tough question. 

BAIRD: I’d be happy to jump in here, if you’d like. 

BAJRAKTARI: Yeah, go ahead. Thank you. 

THOMPSON: Zoë, you want to start? 

BAIRD: You know, Esther was one of the great observers of the growth of the internet in the early days, when we did not try to develop policy and direct the public engagement with the internet in order to meet broader public purposes. And I think she’s reflecting a lot of the output of that. She was the first chair of ICANN and a great reporter on the industry as well. I think what is different now is that there’s more awareness of what has to happen. And there isn’t a sort of “you have to leave the market alone or it won’t grow.” It’s quite clear the market will grow. And there are a lot of people in the industry themselves who are very anxious about what’s happening and want regulation. So I think it’s a very different context. 

And we are trying to develop controls over some of the greatest risks in the arena you talk about. And it’s very complicated. I mean, if you look at how we’re fighting the war in Ukraine, how we’re supporting Ukraine, private sector is playing a role that has really enabled the Ukrainians in remarkable ways. The key there has been to try to make sure that’s in collaboration with the government and that individuals in those companies are not making decisions. You know, and some of that has happened. I mean, you allude to Musk. So I think it’s a—it’s a moment where we have the opportunity to get this right. And it is a mix, as Ylli says, of very thoughtful, targeted regulatory actions and conversations with industry that create expectations and accountability, not just support or a hands-off environment. 

BAJRAKTARI: Can I add two things, Nick, to this? 

THOMPSON: Of course. 

BAJRAKTARI: So, number one is, I completely agree with Zoë. Just to answer: I think we have a sea change in the public-private partnerships that exist in this country. I worked in the Pentagon in 2016 and ’17, when Google pulled back from working with the Pentagon because, you know, there was a petition; they didn’t want Google to be used for military purposes. But if you look at it today, as Zoë was mentioning, most of our tech companies are helping the Ukrainians. I know stories of a company that went in the night before the invasion to pull all the digital archives of the Ukrainian government out of Ukraine for fear that they were going to be hacked and stolen by Russian, you know, forces.  

And so there’s a different momentum that you have now in this space. I think a lot of people have learned the lesson with social media and how that went. I think when you look at the Senate-organized AI hearings, when you had all the leaders raising their hands and asking for this to be regulated, I think it’s a different momentum. Now, are we going to be successful? I think we have to see. The AI EO issued by the White House is a really massive document for an executive order. I don’t think I’ve ever seen a longer document that aims to really empower government agencies to go after regulating AI. And I think the next step would be Congress really doing something on the legislation side about this.  

THOMPSON: So there is—I want to go to the end of Esther’s question, about Elon, about Musk. So let’s say that xAI becomes the most powerful large language model. Like, he’s hiring lots of good research scientists. Like, it is possible. They have all your tweets. They’re able to train on them. Others can’t, right? They have a lot of data that other people don’t have. Let’s say that he builds hyper-efficient AI. And then he’s like, you know what? If I can give it to China they’ll, like, let me make more Tesla—(inaudible)—right? And so you could be in a situation where his economic interests are totally misaligned from what you’re laying out. Is there any step that one should take between now and then to prevent that from happening? 

BAJRAKTARI: I think when it comes down to frontier large language models, the AI EO has language in terms of, like, what is considered a frontier model and what is just a regular large language model, right? So I think at that—at that point you have to look at, you know, investments. You have to look at the outbound export controls, the export controls writ large. So there’s a set of policies in which, when somebody wants to buy or sell things, it has to undergo a process here.  

THOMPSON: So you think we’re sufficiently protected against that? 

BAJRAKTARI: I’m not saying sufficiently, because this is a new area in which I think regulation has to move, to be updated, because it’s a new era that we have all encountered since the release of ChatGPT. Look, none of the government agencies are prepared for this era. They’re trying. They’re building this, because the White House has issued a major document and is asking them to go out and put regulations on how to protect against the worst harms, but also how to enable innovation. But I think this is an area that is going to be always dynamic, because the technology is so dynamic. I mean, you just had last week the newest model that beat every other model in terms of capability, right? 

THOMPSON: That’s great. So I prepped all my questions on the subway. It’s a piece of cake.  

Next question here on the back left. 

Q: Oh, hello. OK.  

THOMPSON: Oh, by the way, I forgot to mention this meeting is on the record. Sorry. I should have mentioned that.  

Q: So Christian Rougeau. I’m with CFR. 

I’m wondering if any of you can speak to innovation when it comes to the climate crisis and the actors involved there? It seems to me that they would have a lot of moral capital going forward, if not, you know, being able to leverage that in some other way. But, you know, getting actors to, you know, work in ways, stepping outside comfort zones of countries, and everything. If you can just speak to innovation there. 

BAJRAKTARI: Zoë, do you have anything on this? (Laughs.) 

BAIRD: Yeah, I’m not really an expert at all on these technologies, but I do think that the cooperation there will be important to building broader cooperation on technology policy generally. 

THOMPSON: Mmm hmm. Ylli, you want to add anything?  

BAJRAKTARI: I mean, I’ll just add that, you know, I think the intersection of AI and energy provides us with a new momentum, maybe, in terms of how can you use the technology towards, you know, achieving those goals and those objectives? But also, like, the demand on energy from AI will make us—I mean, it will really push us to come up with multiple pathways towards the energy sustainability, whether it’s on fusion, fission, you know, solar. But I think because of the high demand, we have to come up with alternative solutions to this. 

THOMPSON: (Inaudible)—if we don’t. In the middle, please. 

Q: Hi. I’m Lauren Wagner. Oh, are we standing? I’ll sit down. 

THOMPSON: You can stand.  

Q: OK. (Laughs.) I invest in AI companies for a firm called Radium Ventures.  

I was wondering how you think about the state of California, specifically Senator Scott Wiener, proposing new legislation called SB 1047, that would likely curtail AI innovation in the state, and particularly open source. So you spoke about maintaining competitive edge on a national scale. And we see a state representative kind of frontrunning the national government in terms of very restrictive and sweeping legislation. How do you think about that when it comes to your priorities? 

BAJRAKTARI: So I haven’t followed, to be honest with you, California state legislation that carefully. But in our report, we talk about proprietary models and open-source models. And I think both require close monitoring and seeing, like, what are the objectives and how you can regulate them. I think the proprietary models are much, much easier, maybe. But on the open source, I think that’s a different story. And I think the risk is also different for the two kinds of models, as you know. With open source, from the release of these open-source models, you know, everybody can use them. I think when you look at—I think I was reading an article yesterday that the progress China has made on AI was done largely because of access to open-source models.  

THOMPSON: Yeah, yeah. They built it based on it modeling— 

BAJRAKTARI: Yeah. Exactly. I was in India earlier in the year and I think they were pretty adamant that, you know, they are counting a lot of their progress on open-source models. 

THOMPSON: Wait, wait. So this would suggest that maybe you’re aligned with the California legislation, which I would have thought intuitively you were quite opposed to, right? Because the California legislation, you know, AI advocates say it’s quite bad for AI innovation, but it does have this effect that it could really shut down open source. So would you be in favor of that? 

BAJRAKTARI: In shutting down the open source? 

THOMPSON: Yeah, in the California legislation. 

BAJRAKTARI: So I haven’t read it, as I said. So I’m not going to comment— 

THOMPSON: It doesn’t say—the authors of the bill did not say that. 

BAJRAKTARI: But I would say—like, and our report—and our report says, like, look, there are benefits to the open-source model, and you know the risks. You know, we have—we have benefited from the open research, open innovation, you know, all the papers that have been published online by our companies within the last couple of years. So I think, you know, shutting down the entire open source is probably a bridge too far at this point. But I think you have to find a way to mitigate the risk. 

THOMPSON: Super, super interesting. Tight here, just to Lauren’s right. 

Q: Hi. Munish Walther-Puri. Nice to see you, Ylli. Hi, Zoë. 

Thank you for a robust discussion. And I read the report. I think it’s visionary in some important ways. I wanted to push on, we’ve been talking about regulation, and there’s a good reason. Europe came up once here. And you asked a good question, I think, which is: Is U.S. versus China the only frame? I think if you’re in Europe, there’s another frame. And so I wanted to ask you, how do you think about the European frame? Both in regulation and in competition. And who are the U.S.’ other competitors, besides China? Thanks.  

BAJRAKTARI: Yeah, no, that’s a great question. And if I may, I’ll defer to Zoë, because she has done a lot of work in terms of building the international partnerships and alliances on the AI issue. And I think this is not a sole U.S. versus China model. In our—in our report, if you’ve seen it, we argue pretty strongly that this is democracies versus autocracies model, in which, you know, you have one model that is us and our allies and partners—and I’m not including just Europeans, but including, you know, India, South Korea, Singapore that Zoë mentioned, Japan, many other countries around the world—they subscribe to the model of governance that we have.  

So the argument here is: How do we democracies come together much more closely on building and using this technology? Yes, there are, you know, differences in how Europeans view innovation versus regulation and how we do it. But I think there are so many more things that bring us together than, you know, set us apart. And so bridging those differences. You know, for many, many years we have argued that EVs are going to flood the European market and that it’s going to cause big harm to their manufacturing capability. Look at what China is doing now with the export of these EV models, right, hurting primarily Europeans, writ large, in their manufacturing capabilities. 

So I think we have to, as democracies, come together, agree that there is a set of technologies in which we together have to stay ahead. We have to identify the comparative advantages that each of the countries brings to the table. Some countries are going to be much bigger on solar, on fusion. Others will be better on hydropower. And so how can we build on each other’s comparative advantages? You know, people talk about building these AI clusters. These AI clusters will be built, you know, among democratic nations based on their comparative advantages, so that we can all utilize the algorithms, the data, and the progress. But we should be clear that this is systems against systems, of beliefs, values, and rule of law. 

BAIRD: Yeah, I think Ylli’s right. The real issue here is how to build the strength of countries that share values. And Europe is an interesting case. I think we’ve been playing a very important role there, as there’s been some tension between European—you know, individual European nations and the EU itself, because there’s a very strong desire—I mean, you see this with the French investments in Mistral and in developing an open-source advanced AI industry. And you see it in Germany too. There’s a great deal of interest in some leading European countries in developing their own leading AI, advanced AI companies.  

At the same time, there is an EU approach to regulation historically that could be applied in this arena. And there’s a tension in that regard, I think, that is playing out in Europe. And you see it also in other sectors. I mean, for example, we really need European companies, as well as Japanese and Singaporean and, you know, companies from all over the world, to be part of the supply chain for our semiconductor industries. And so we really need to build new business and government relationships in order to strengthen the coalition of shared values.  

And that coalition is very broad. It has a lot of potential new entrants from countries that have not been deeply engaged in some of the other arenas where we have collaborated with Europe and some other countries. But it also—there’s also a real tension with countries that don’t share our values. And we can’t underestimate that either. 

THOMPSON: Tina, in the back. 

Q: Thank you. Tina Bennett, Bennett Literary.  

I’m just wondering if, in addition to the important categories that you’ve identified of AI, energy, et cetera, does space factor in? And I ask because I have an adorable nephew who’s a junior flight controller at NASA on the Artemis mission. And my family recently went down and we had a tour of, you know, the Houston Space Center, the Johnson Space Center. And it’s thrilling. And he’s young, and patriotic, and loves his job. But I couldn’t help but notice, walking around, that it looks a little—I won’t say beat up and sad, but a little beat up and sad.  

And I know that a lot of the young talent is very dazzled by SpaceX. And it’s not easy to find people like my nephew, who sign up for NASA when they could go to SpaceX. Which has a very different philosophy about innovation, in addition to the fact that it’s under the control of a private individual rather than—who is effectively as powerful, or will be soon, as a sovereign state, it seems to me. And we’ve seen this with Starlink. So question—I don’t—I don’t want to make this another question about Elon Musk, although he tends to come up. He’s kind of unavoidable. But is space still an important category in your thinking of competitiveness?  

BAJRAKTARI: No, absolutely. I mean, if you look at—if you look at what our adversary has done in space in recent years, enormous progress. I mean enormous progress in terms of launches, in terms of, you know, capabilities. But I would also argue, you know, space on our side is not just happening in the private sector. You know, we have stood up, for the first time in probably decades, a new service in the military, the Space Force. Which I think is a recognition of how important space is for us, not just in terms of communication, command, and control and everything, you know, that our military needs, but how much the importance of space has grown in terms of just the geopolitical competition. 

Because China knows space is really important. And so we hosted recently the space commander in our offices. You know, and when you look at what they’re doing, if you look at the number of recruits they have, there’s a lot of energy also—not just serving in the private sector, but also serving in the military in the Space Force. So this is an area in which, you know, as I said, we haven’t stood up a service in probably a long, long time. And I think this is a remarkable moment for us, you know, standing up a new force like that, that will bring people to focus on one domain alone. 

THOMPSON: Yeah, right there in the middle there, the gentleman. 

Q: Henry Breed. 

I have a question for you that goes back to earlier comments on our multilateral relationships and interests. Clearly there’s fluctuation in our position that has gone on and is going on. There’s the quote from President Biden when he first came in that he would go to multilateral meetings and try to assure partners that America is back, only to be asked, yes, but for how long? How are you dealing with some of those mistrusts or worries or concerns in the relationships you’re trying to build, or suggesting strengthening?  

BAJRAKTARI: Yeah. No, that’s a great question. And I think I was starting to think about this earlier. If there’s one topic in the middle of this political polarization environment we live in that brings together all sides in Congress and has been continuing for the last two administrations, it’s the technology competition with China. I worked on a national security commission on AI from 2019 to 2021. We were able to pass fifty-five pieces of legislation because both sides of the aisle in Washington recognize that we need to move fast on this technology space. Part of that was also the CHIPS—the famous CHIPS Act, in which, you know, we invested $52 billion. And I think if you look at policies, you know, that started with Trump and then continued with the Biden administration, I think technology continued to hold, you know, a really important place in that competition with China.  

And I think this will continue even beyond January of next year, because I think this is where the center of competition is happening, as I mentioned from the beginning. China knows this. I think we are understanding that we are in a generational competition over the set of technologies that will dominate, not this decade but the rest of the century. And so we have to get our act together. I think obviously harmonizing policies with our key allies is a key here. I think Zoë mentioned export control policies that required us to harmonize our policies with the Dutch, the Japanese, and the Koreans.  

And I think all these things will continue, because I think we’re going through a massive global market reordering where companies are leaving China. Companies are moving their assets, their fabs elsewhere. And I think this is the moment in which we, with our allies and partners, need to create that new vision. What will the world look like for the next five to ten years, in which we, democracies, build the technology of the future, you know, and compete much more effectively against Chinese ambitions and their technology lists? 

THOMPSON: So are you America’s last— 

BAIRD: I’d like to address that. 

BAJRAKTARI: Zoë wanted to add something.  

THOMPSON: (Laughs.) OK, sorry. Jump in. 

BAIRD: Sorry, I was just jumping in on that. I couldn’t hear your question if I talked over it. But I think that the picture’s a little more complicated than that. And it’s a very good question. Certainly, a lot of the rest of the world is paying attention to that question. I would just suggest that in these four years, or three and a half, or whatever, that President Biden’s been president, he has led us to the place where we’re able to have this conversation. And I think we have to recognize that. Ylli discussed how AI policy wasn’t addressed in earlier administrations. And this conversation about how we’re bringing industries back to the U.S. on key technologies, how we’re protecting—using our export control mechanisms, we’re using muscles we haven’t used in years in export controls in order to prevent these technologies from falling into China’s hands.  

The investments we’re making in industries, coinvesting with industry, building out broadband so the innovators of the country can all participate, working with our allies to develop supply chains, some of which we just recognized after COVID was broken. But the ambition that we’ve had to both power American innovation and also rebuild the foundation for America’s strength has really been exceptional. And you know, I would say that I don’t know that President Biden gets enough credit for that, really building the future. But I don’t—I don’t think we can take the question you asked and view it as neutral. I just don’t think we know whether the emphasis in the future will be on the tensions and conflict that you mentioned when you talk about powering a new Cold War, or whether we would continue these policies, which have been so foundational in capitalizing on the potential of America’s strength. 

BAJRAKTARI: Can I add one more thing here to your question and Zoë’s point? I think we also didn’t mention—another key factor is the countries in the rest of the world, also known as the Global South, in which I think how we position with our allies and partners, the relationship we build with the rest of the world, and how we counter China there will matter in the next five years. Obviously, China, for the last couple of years, has gone there, offered them technologies, heavily subsidized, with huge promises. I think now, you know, with us working closely with our allies on the set of these technologies, we can give them the choice. You know, an alternative to the Chinese model. You know, for years they offered their 5G model. We still don’t have an American 5G model unless we partner with the Scandinavians or the Japanese and go and help these countries build their own connectivity. 

THOMPSON: Sounds a lot like we’re back in, like, 1951 again. Goes back to my first question.  

BAJRAKTARI: Your grandfather would be— 

THOMPSON: He’d be very, very proud of your—of your ideas. 

All right. Last question, here at the second table here on the left. 

Q: (Off mic)—for CAMMVets Media. I should mention that we cover West Point, and they just graduated one cadet into the Space Force this year, last month. So a big part of the future in defense.  

We’ve been following kind of some of the air defense issues, the thought about hypersonic missiles is a concern. Who has the edge? Everybody’s kind of understated, but following the defense against the Iranian missile attack in April against Israel, the 300 missiles that were knocked down, constant concern about tracking North Korea’s missiles, is the answer on keeping ahead that edge in technology working with allies? You certainly have the Australia, you know, submarine deal that’s out there. Thank you. 

BAJRAKTARI: No. I mean, I think all the elements you describe are—I think we’re observing also in Ukraine how, you know, warfare is changing and how these technologies are, you know, bringing new dimensions to conflict. Whether it’s before conflict, in terms of preparation of the environment with, you know, space and cyberattacks. During conflict, with many small, you know, distributed networked drones. And then the human-machine teaming aspect, where, you know, one person can control many capabilities at once.  

So I think we’re at the beginning of a change in the character of war. Maybe not so much in the nature of war, as my colleagues would say. But I think, you know, one thing that we have argued in our report is that the speed and scale of warfare are changing. I think we have pushed the Pentagon, through all our work, to adopt these technologies faster. The future of war is also becoming much heavier on the software side. And, as you know, the Pentagon is notorious for not using the latest and greatest software. So how they buy, how they adopt, and how they integrate software and capabilities much, much faster matters for the future of warfare. And so we are on that brink. And I think AI could be a catalyst for the Pentagon to try to move faster in this space.  

THOMPSON: Zoë. 

BAIRD: I really don’t have anything to add, thanks.  

THOMPSON: All right. Well, on that sunny and cheerful note, let’s all head off into a lovely June day. (Laughter.) Thank you so much for joining us. Thank you to Zoë. Thank you to Ylli. (Applause.) Video and transcript will be posted online. That was awesome. 

BAJRAKTARI: Thanks, Nick. Thanks, man. 

(END) 

This is an uncorrected transcript. 
